Regret Bounds without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses
In online convex optimization (OCO), Lipschitz continuity of the functions is commonly assumed in order to obtain sublinear regret. Moreover, many algorithms have only logarithmic regret when these functions are also strongly convex. Recently, researchers from convex optimization proposed the notions of "relative Lipschitz continuity" and "relative strong convexity". Both of the notions are generalizations of their classical counterparts. It has been shown that subgradient methods in the relative setting have performance analogous to their performance in the classical setting.
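For reference, the notions named in the abstract are commonly formalized via a Bregman divergence. The following is a sketch of one standard formulation (following the relative-continuity literature, e.g. Lu, 2019, for a differentiable convex reference function h; exact constants and quantifiers vary across papers):

```latex
% Bregman divergence of a differentiable convex reference function h:
D_h(y, x) \;=\; h(y) - h(x) - \langle \nabla h(x),\, y - x \rangle .

% f is L-Lipschitz continuous relative to h if, for all x, y and
% every subgradient g \in \partial f(x),
\|g\|_{\ast}\, \|y - x\| \;\le\; L \sqrt{2\, D_h(y, x)} .

% f is \mu-strongly convex relative to h if, for all x, y and
% every subgradient g \in \partial f(x),
f(y) \;\ge\; f(x) + \langle g,\, y - x \rangle + \mu\, D_h(y, x) .
```

Taking h(x) = \|x\|^2/2 gives D_h(y, x) = \|y - x\|^2/2, and both conditions reduce to the classical Lipschitz and strong-convexity conditions, which is the sense in which the relative notions generalize their classical counterparts.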
Review for NeurIPS paper: Regret Bounds without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses
First, the main class of losses that the paper introduces, that of relative Lipschitz continuity, is closely related to the Riemannian Lipschitz continuity (RLC) condition of Antonakopoulos et al. (2020). In particular, given that the losses are (RLC), one can recover relative Lipschitz continuity via a direct combination of convexity and the Cauchy-Schwarz inequality. Conversely, every relative-Lipschitz-continuous loss can be seen as (RLC) if one chooses the respective Riemannian metric accordingly; this becomes even more evident for the example that the paper presents: if f(x) = x^{2} for x \in R, then one can straightforwardly choose the Riemannian metric in such a manner that the respective dual norm is \|v\|_{x,\ast} = |v|/x, and (RLC) follows. That said, this significantly weakens the contributions concerning FTRL and the like, since analogous guarantees were already established by Antonakopoulos et al. (2020). On the other hand, concerning the most intriguing part, that of establishing logarithmic regret for the case where the loss functions are in addition relatively strongly convex, there is no obvious way to establish any relevant examples that simultaneously satisfy relative Lipschitz continuity and relative strong convexity, besides of course the Euclidean ones.
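The review's example can be checked numerically: for f(x) = x^2 on x > 0, the Euclidean gradient norm |f'(x)| = 2x grows without bound, so no classical Lipschitz constant exists, while in the metric-weighted dual norm \|v\|_{x,\ast} = |v|/x from the example it is constant. A minimal sketch (the norm choice is taken from the review's example; the grid of test points is arbitrary):

```python
# Numerical check of the review's example: f(x) = x^2 has an unbounded
# Euclidean gradient norm, but a constant metric-weighted dual norm
# ||v||_{x,*} = |v| / x (taking x > 0).

def f_grad(x):
    return 2.0 * x                 # f'(x) for f(x) = x^2

def weighted_dual_norm(v, x):
    return abs(v) / x              # the dual norm from the review's example

xs = [10.0 ** k for k in range(6)]                        # 1, 10, ..., 100000
euclid = [abs(f_grad(x)) for x in xs]                     # 2x: blows up
weighted = [weighted_dual_norm(f_grad(x), x) for x in xs]

print(euclid[-1])   # 200000.0 -- no Euclidean Lipschitz constant exists
print(weighted)     # constant 2.0 at every test point
```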
Review for NeurIPS paper: Regret Bounds without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses
This paper treats the problem of online convex optimization without Lipschitz continuity of the loss functions. The authors consider a variant of Lipschitz continuity called "relative Lipschitz continuity": this notion is originally due to Lu (2019) and involves a Bregman divergence instead of the standard norm in comparing nearby points. In this context, the authors prove the following results:

- Under only relative Lipschitz continuity: an O(\sqrt{T}) regret bound for follow-the-regularized-leader (FTRL) and a "stabilized" variant of the online mirror descent (OMD) algorithm.

These results are similar to standard bounds in the literature for Lipschitz continuous / strongly convex functions. The extension to *relative* Lipschitz continuous / strongly convex functions was welcomed by the reviewers, but two major issues were identified:

1. An earlier ICLR paper by Antonakopoulos et al. (2020) already provides O(\sqrt{T}) bounds for FTRL and OMD under a closely related "Riemannian Lipschitz continuity" condition.
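As a reference point for the FTRL bound mentioned above, here is a minimal sketch of follow-the-regularized-leader with linearized losses and a Euclidean quadratic regularizer (a standard textbook instance of FTRL, not the paper's relative-regularizer variant; the toy loss f_t(x) = |x - 1| and step size are illustrative choices). With this regularizer the argmin has a closed form:

```python
import math

def sign(z):
    return 1.0 if z > 0 else (-1.0 if z < 0 else 0.0)

T = 100
eta = 1.0 / math.sqrt(T)   # step size matching the O(sqrt(T))-regret regime

x = 0.0          # x_1 minimizes the regularizer R(x) = x^2 / (2 * eta)
g_sum = 0.0      # running sum of (sub)gradients of the linearized losses
total_loss = 0.0
for t in range(T):
    g = sign(x - 1.0)           # subgradient of f_t(x) = |x - 1| at x
    total_loss += abs(x - 1.0)
    g_sum += g
    # FTRL iterate: argmin_x <g_sum, x> + x^2 / (2*eta)  =>  x = -eta * g_sum
    x = -eta * g_sum

regret = total_loss - 0.0       # the comparator u = 1 suffers zero total loss
print(regret, 2 * math.sqrt(T)) # regret stays well below the sqrt(T) scale
```

On this toy run the iterate walks up to the comparator point and the cumulative regret is a small constant, consistent with (much better than) an O(\sqrt{T}) guarantee.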